The New York Times is suing AI search firm Perplexity for unauthorized copying and distribution of its copyrighted news and videos, seeking an injunction and damages. This is the newspaper's second copyright lawsuit against a generative AI company, following its case against OpenAI and Microsoft last year. The complaint alleges that Perplexity's RAG technology outputs content nearly identical to the original, and cites over 175,000 crawl requests to the Times' site in ...
NetEase Youdao Dictionary has revealed its hot words of 2025, with "DeepSeek" topping the list at 8.67 million searches, the first word of the year to originate from a domestic AI large model. Search volume surged after the release of the DeepSeek-R1 model in February, and subsequent technical breakthroughs drove further query peaks. College students and working professionals make up the main body of searchers, and users who look up the term often continue on to related concepts such as "large models", forming a "look up a word, then learn the concept" chain that reflects how the spread of AI technology is deepening public awareness.
Google's Gemini can now identify AI-generated images in response to user queries. Future plans include video and audio verification and integration into services such as Search, relying primarily on Google's proprietary technology.
The domestic Kimi K2 Thinking model has been integrated into the globally renowned AI search application Perplexity, becoming the only domestic model on the platform. The integration arrived alongside OpenAI's GPT-5.1, underscoring the international competitiveness of domestic AI technology. Perplexity, a conversational answer engine founded in 2022, has grown into the highest-valued AI search application in the world, reshaping the way users access information.
Use advanced semantic search technology to find more relevant potential customers.
A search engine that utilizes facial recognition technology for in-depth profiling.
Create your own advanced search engine by leveraging AI technology.
Utilizes vector search technology to create a tool for searching relevant stocks based on descriptions.
| Provider | Input tokens/M | Output tokens/M | Context Length |
|-----------|----------------|-----------------|----------------|
| Anthropic | $105 | $525 | 200 |
| Google | $0.7 | $2.8 | 1k |
| Alibaba | $8 | $240 | 52 |
|  | $54 | $163 |  |
| Tencent | $1 | $4 | 32 |
|  | - |  |  |
| Baidu |  |  |  |
| Openai | $0.4 |  | 128 |
| Bytedance | $0.8 |  | 256 |
| Chatglm | $16 |  |  |
| Iflytek | $2 |  |  |
| Xai | $21 |  |  |
| Deepseek |  |  |  |
cpatonn
Llama-3.3-Nemotron-Super-49B-v1.5 is a large language model derived from Meta Llama-3.3-70B-Instruct. Multi-stage post-training has strengthened its reasoning, chat preference, and agentic task capabilities. Neural architecture search significantly improves its efficiency while maintaining high accuracy, and it supports a 128K-token context length and multilingual processing.
Mungert
A large language model based on Meta Llama-3.3-70B-Instruct and optimized through multi-stage post-training, it performs strongly on reasoning and chat tasks, supports multiple languages, and suits a wide range of AI application scenarios. Thanks to neural architecture search, it runs efficiently on a single H100 80GB GPU.
gabriellarson
Llama-3.3-Nemotron-Super-49B-v1.5 is an efficient large language model developed by NVIDIA and derived from Meta Llama-3.3-70B-Instruct. It performs strongly on reasoning, chat, and agentic tasks, significantly reduces memory usage through neural architecture search, and supports a 128K-token context length, with enhanced capabilities in mathematics, code, science, and tool invocation.
nvidia
A large language model derived from Meta Llama-3.1-405B-Instruct and optimized via neural architecture search, supporting a 128K-token context length and suited to reasoning, dialogue, and instruction-following tasks.
opensearch-project
A sparse retrieval model trained via distillation and optimized for OpenSearch, supporting inference-free document encoding with improved search relevance and efficiency over V1.
facebook
GENRE is a sequence-to-sequence-based entity retrieval system that employs a fine-tuned BART architecture and generates unique entity names through constrained beam search technology.
RegNet image classification model trained on ImageNet-1k, designed using neural architecture search technology.
RegNet image classification model trained on the ImageNet-1k dataset, designed using neural architecture search technology.
amazon
BORT is a highly compressed version of BERT-large, optimized through neural architecture search technology, achieving up to 10x faster inference speed while outperforming some uncompressed models.
The Biel.ai MCP server connects the IDE to product documentation and enables AI tools to access and search the company's knowledge base through RAG technology, providing intelligent code completion and technical Q&A.
A document retrieval system based on MongoDB Atlas vector search and Voyage AI embedding technology, supporting semantic search and text matching, including document chunking, embedding generation, and storage functions.
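The pipeline this entry describes (chunk the documents, embed each chunk, store the vectors, rank by similarity at query time) can be sketched in a few lines. This is a toy in-memory version: a bag-of-words counter stands in for Voyage AI embeddings, and a Python list stands in for MongoDB Atlas vector storage.

```python
# Minimal chunk -> embed -> store -> search pipeline with cosine ranking.
import math
from collections import Counter

def chunk(text, size=40):
    """Fixed-size character chunks; real chunkers respect sentence boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Stand-in embedding: lowercase word counts. Swap in a real model in practice."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(store, query, top_k=2):
    """Rank stored chunks by cosine similarity to the embedded query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:top_k]]

docs = ["vector search ranks documents by embedding similarity",
        "bananas are rich in potassium"]
store = [{"text": c, "vec": embed(c)} for d in docs for c in chunk(d, size=80)]
print(search(store, "embedding similarity search"))
```

The real system replaces `embed` with an embedding API call and `search` with an index lookup, but the shape of the pipeline is the same.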
The Memvid MCP Server is an AI service interface based on video memory technology, supporting encoding content such as text and PDFs into video formats and providing semantic search and conversation functions.
YouTube MCP is an AI-based solution designed to enhance the YouTube content interaction experience through machine learning technology. It supports video search, subtitle access, and semantic search without using the official API.
This is an MCP server project that enables direct search and retrieval of SAP Notes/KB article content through SAP Passport certificate authentication and Playwright automated browser technology.
A privacy-friendly web search server based on MCP technology that uses SearXNG to provide multi-engine search, supporting multiple search categories and filtering options.
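Multi-engine aggregation of the kind SearXNG performs internally boils down to merging per-engine result lists, deduplicating by URL, and scoring each URL across engines. A hedged sketch (the URLs are made up and the reciprocal-position scoring is illustrative, not SearXNG's actual algorithm):

```python
# Merge ranked URL lists from several engines; a URL's score is the sum of
# reciprocal positions, so duplicates across engines collapse into one
# entry with a boosted score.
def merge_results(engine_results):
    scores = {}
    for results in engine_results.values():
        for pos, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / pos
    return sorted(scores, key=scores.get, reverse=True)

engines = {
    "engine_a": ["https://a.example", "https://b.example", "https://c.example"],
    "engine_b": ["https://b.example", "https://c.example"],
}
print(merge_results(engines))
```

A URL that several engines rank highly ends up above one that only a single engine returned, which is the main value of querying multiple engines at once.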
The Claude Project Coordinator is an MCP server for managing and coordinating multiple Xcode/Swift projects, providing project status tracking, code pattern search, and development knowledge base maintenance functions. It includes features such as security verification, project analysis, and automated technology detection.
A Godot documentation query assistant based on Retrieval Augmented Generation (RAG), achieving intelligent Q&A through vectorization technology and semantic search.
MCP Memory is a memory storage service built on Cloudflare Workers that provides cross-conversation memory for MCP clients, using vector search to retrieve semantically related memories.
Open Deep Research MCP Server is an AI-driven deep research assistant that conducts iterative deep research by combining search engines, web scraping, and AI technology to generate comprehensive reports. It can be used via the MCP protocol or a CLI, and offers reliability assessment, scope control, and automatic generation of follow-up questions.
This project is a remote MCP server dedicated to querying the documentation of the ATproto protocol. It is deployed through Cloudflare and uses AutoRAG technology to extract information from the public ATproto documentation, providing developers with a convenient document search tool.
RagCode MCP is a privacy-first local AI code assistant that enables AI assistants to understand the entire codebase through semantic vector search and RAG technology. It supports multiple languages such as Go, PHP, and Python, without relying on the cloud.
An MCP server that provides Google Search and web content viewing functions, with advanced anti-crawler evasion technology.
Apple RAG MCP is a retrieval-augmented generation system that provides Apple development expertise for AI agents. It integrates official Swift documentation, design guides, and Apple Developer YouTube content, and provides accurate technical answers through AI-driven hybrid search technology.
An intelligent search engine that combines LangChain, MCP protocol, RAG technology, and Ollama, supporting web search, information retrieval, and answer generation, with the ability to call local and cloud LLMs.
This is a PostgreSQL semantic search server based on the Model Context Protocol (MCP) standard. It enables AI assistants to understand the semantic structure of the database and execute natural language queries through vector embedding technology.
An experimental Google Search MCP server that provides enhanced search capabilities through web scraping technology, including advanced anonymization and anti-detection features, though there is a risk of being restricted by Google.
A web search MCP service based on pure crawler technology that supports Bing web and news searches, requires no official API, and includes intelligent anti-detection and URL cleaning functions.
The Build Vault MCP server transforms The Build Podcast into a searchable knowledge base. Through hybrid search technology that combines semantic similarity and full-text search, it provides AI insights into business ideas, frameworks, and product strategies.
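Hybrid search of the kind described here combines a semantic-similarity score with a full-text keyword score. A toy sketch using a weighted sum; both scoring functions are crude stand-ins (a real system would use embedding cosine similarity and BM25):

```python
# Hybrid ranking: weighted sum of a semantic score and a keyword score.
def keyword_score(query, doc):
    """Full-text stand-in: fraction of query words appearing in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def semantic_score(query, doc):
    """Toy proxy for embedding similarity: shared character-trigram fraction."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    q, d = grams(query.lower()), grams(doc.lower())
    return len(q & d) / len(q) if q else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    """alpha weights the semantic signal against the keyword signal."""
    scored = [(alpha * semantic_score(query, d)
               + (1 - alpha) * keyword_score(query, d), d) for d in docs]
    return [d for s, d in sorted(scored, reverse=True)]

docs = ["pricing strategy for a new product launch",
        "recipe for sourdough bread"]
print(hybrid_rank("product pricing strategies", docs)[0])
```

The semantic component catches near-matches that exact keywords miss ("strategies" vs "strategy"), while the keyword component keeps exact terms from being drowned out, which is why the two are usually fused rather than used alone.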
An intent-based Breville digital asset discovery MCP service that enables intelligent asset search and recommendation through natural language and AI technology.